[last updated: March 6, 2020]
A draft FRVT Quality Assessment report is now available for public comment. The FRVT Quality Assessment evaluation is an ongoing test that remains open to new participation. The report will be updated as new algorithms are evaluated, as new datasets are added, and as new analyses are included. Please direct comments to frvt@nist.gov.
The following plots show FNMR versus the fraction of images rejected, illustrating how FNMR decreases as the worst-quality data is discarded. Each row corresponds to a recognition algorithm, and the lines within each panel correspond to quality assessment algorithms. A perfect quality algorithm would predict which images are implicated in false non-matches. The line labeled “PERFECT” is generated using max((FNMR - X), 0), where X is the fraction of images rejected. The closer a quality algorithm’s line is to the “PERFECT” line, the better its quality predictions align with recognition outcomes. The solid grey horizontal line marks the starting FNMR, and the dotted grey horizontal line marks half the starting FNMR. The point where a quality algorithm’s line crosses the dotted grey line indicates the fraction of lowest quality scores that must be rejected in order to reduce FNMR by half.
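The error-versus-reject computation described above can be sketched as follows. This is an illustrative Python sketch, not NIST’s implementation; the function and variable names are our own, and it assumes per-comparison quality scores (higher is better) together with a flag marking which mated comparisons produced false non-matches:

```python
import numpy as np

def fnmr_vs_reject(quality, is_false_nonmatch, reject_fractions):
    """FNMR remaining after rejecting the lowest-quality fraction of data.

    quality           : quality score per mated comparison (higher = better)
    is_false_nonmatch : boolean per comparison, True for a false non-match
    reject_fractions  : fractions of lowest-quality comparisons to discard
    """
    order = np.argsort(quality)               # lowest quality first
    fnm = np.asarray(is_false_nonmatch)[order]
    n = len(fnm)
    fnmr = []
    for x in reject_fractions:
        keep = fnm[int(round(x * n)):]        # drop the lowest-quality fraction x
        fnmr.append(keep.mean() if len(keep) else 0.0)
    return np.array(fnmr)

def perfect_line(fnmr0, reject_fractions):
    """The "PERFECT" reference: an ideal quality algorithm rejects the
    false non-matches first, so FNMR falls as max(FNMR - X, 0)."""
    return np.maximum(fnmr0 - np.asarray(reject_fractions), 0.0)
```

A quality algorithm’s curve from `fnmr_vs_reject` can then be plotted against `perfect_line` to read off how close it comes to ideal rejection behavior.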
Face recognition accuracy has improved markedly due to the development of new recognition algorithms and approaches. Nevertheless, recognition error rates remain significantly above zero, particularly in applications where photography of faces is difficult or where stringent thresholds must be applied to recognition outcomes to reduce false positives. For those applications that retain an image as an authoritative reference sample against which future recognition is performed, it is critical to maintain database quality. To that end, quality assessment tools are applicable.
The defining properties of a face quality scalar are described in the Quality Concept Document linked above. The document adopts the ISO/IEC/ICAO specifications of a full-frontal face as the reference against which quality must be assessed. ISO/IEC JTC 1 SC 37, the committee for standardization in biometrics, is expected to commence work on face image quality assessment in mid-2019. This page will be updated with status reports on progress there.
A number of commercial tools exist that report quality scalars and vectors. The efficacy of quality assessment algorithms is important because they can make two kinds of error: false rejection, saying an image is poor when it is not, which drives costs; and false acceptance, saying an image is good when it is not, which drives future recognition errors. Implicit in this statement is that image quality should predict recognition failure, and this gives the basis of the evaluation planned at NIST. In short, NIST will run quality assessment algorithms on large sets of images and relate their outputs to face recognition outcomes. The approach is detailed in the draft Quality Concept Document. The quality-as-predictor-of-recognition approach proved successful in the development of NFIQ for fingerprints, which was built using machine learning to map measurements from images to recognition scores.
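The two quality-decision errors above can be made concrete with a small sketch. This illustrative Python (our own naming, not part of the NIST evaluation) treats a downstream recognition failure as the ground truth for “poor” quality and measures both error rates at a given quality threshold:

```python
import numpy as np

def quality_error_rates(quality, caused_recognition_error, threshold):
    """Rates of the two quality-decision errors at a quality threshold.

    quality false rejection rate (qfrr): images scored below threshold that
        were in fact recognized correctly (drives costs)
    quality false acceptance rate (qfar): images scored at/above threshold
        that nevertheless caused recognition errors (drives future failures)
    """
    q = np.asarray(quality)
    bad = np.asarray(caused_recognition_error)
    rejected = q < threshold
    # fraction of genuinely fine images that quality wrongly rejects
    qfrr = (rejected & ~bad).sum() / max((~bad).sum(), 1)
    # fraction of problem images that quality wrongly accepts
    qfar = (~rejected & bad).sum() / max(bad.sum(), 1)
    return qfrr, qfar
```

Sweeping `threshold` trades one error against the other, which is why a quality algorithm is best judged by the full error-versus-reject behavior rather than a single operating point.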
To participate in this evaluation, developers need to submit a participation agreement to NIST, wrap their software behind the published C++ API, run their libraries through the provided validation package (which creates a submission package), encrypt the package, and provide a download link for the encrypted submission package. More details are provided below.
All organizations that seek to participate in FRVT must sign and submit all pages of this Participation Agreement. NOTE: Organizations that have already submitted a participation agreement for FRVT Ongoing 1:1 do not need to send in a new one UNLESS the organization has updated its cryptographic signing key. [last update: 2019-04-23]
A definitive API document has been published. All FRVT APIs reference the supporting FRVT General Evaluation Specifications, which cover the hardware and operating system environment, software requirements, reporting, and common data structures that support the APIs. All submitted algorithms must adhere to the published C++ API. [last update: 2019-04-23]
A validation package has been published. All participants must run their software through the updated validation package prior to submission. The purpose of validation is to ensure consistent algorithm output between your execution and NIST’s execution. [last update: 2019-04-23]
All submissions must be properly encrypted and signed before transmission to NIST. This must be done according to these instructions using the FRVT Ongoing public key linked from this page. Participants must email their public key to NIST. The participant’s public key must correspond to the participant’s public-key fingerprint provided on the signed Participation Application. [last update: 2019-04-23]
Encrypted files below 20 MB can be emailed to NIST at frvt@nist.gov. Encrypted files above 20 MB can be provided as a download link from a generic HTTP webserver (e.g., Google Drive). We cannot accept Dropbox links. NIST will not register, or establish any kind of membership, on the provided website. Participants can submit their algorithm(s) as soon as the signed participation agreement is sent to NIST; there is no need to wait for NIST confirmation of the received agreement. Participants must subscribe to the FRVT mailing list to receive emails when new reports are published or announcements are made. [last update: 2020-03-27]
Inquiries and comments may be submitted to frvt@nist.gov.
Subscribe to the FRVT mailing list to receive emails when announcements or updates are made.